Current Issue : October-December Volume : 2024 Issue Number : 4 Articles : 5 Articles
Tomato cultivation is increasingly widespread, yet it faces significant challenges, particularly from plant diseases caused by fungi, bacteria, and insects. Addressing these diseases is crucial for ensuring the quality and yield of tomato crops. To support specialists in accurately identifying and managing these diseases, we propose an advanced automatic system for detecting and identifying tomato leaf diseases using sophisticated image processing techniques. Our approach employs robust feature extraction methods, including the gray level co-occurrence matrix (GLCM) and the scale-invariant feature transform (SIFT), coupled with a support vector machine (SVM) for classification. We curated an extensive dataset of 2700 tomato leaf images, with a minimum of 300 images for each of the nine distinct disease classes. This comprehensive dataset facilitated the training and testing of various machine learning and deep learning models. The experimental results highlight the exceptional accuracy and reliability of the proposed approach, which significantly improves the detection and classification of tomato leaf diseases. A thorough comparative analysis with contemporary state-of-the-art techniques further validates the superiority of our system. Our findings suggest that this framework can significantly benefit tomato cultivation by enabling timely and precise disease management. Future research could explore integrating advanced deep learning algorithms to further improve accuracy in this multiclass classification challenge....
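The sketch below illustrates the kind of pipeline the abstract describes: GLCM texture statistics and mean-pooled SIFT descriptors concatenated into one feature vector for an SVM. It is only a minimal illustration under assumed settings (GLCM distances/angles, descriptor pooling, RBF kernel); the authors' exact configuration is not given in the abstract.

    # Hedged sketch: GLCM + SIFT features fed to an SVM (not the authors' exact setup)
    import cv2
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops
    from sklearn.svm import SVC

    def leaf_features(gray):
        # GLCM texture statistics (contrast, homogeneity, energy, correlation)
        glcm = graycomatrix(gray, distances=[1], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        glcm_feats = [graycoprops(glcm, p).mean()
                      for p in ("contrast", "homogeneity", "energy", "correlation")]
        # SIFT descriptors, mean-pooled into a fixed-length vector (assumed pooling)
        sift = cv2.SIFT_create()
        _, desc = sift.detectAndCompute(gray, None)
        sift_feats = desc.mean(axis=0) if desc is not None else np.zeros(128)
        return np.concatenate([glcm_feats, sift_feats])

    # X: list of grayscale leaf images, y: labels 0..8 for the nine disease classes
    # clf = SVC(kernel="rbf").fit([leaf_features(img) for img in X], y)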
Mapping neural connections within the brain has been a fundamental goal in neuroscience, aimed at better understanding its functions and the changes that follow aging and disease. Developments in imaging technology, such as microscopy and labeling tools, have allowed researchers to visualize this connectivity through high-resolution, brain-wide imaging. As a result, image processing and analysis have become increasingly crucial. However, despite the wealth of neural images generated, access to an integrated image processing and analysis pipeline is challenging because information on available tools and methods is scattered. Mapping neural connections requires registration to atlases and feature extraction through segmentation and signal detection. In this review, we provide an updated overview of recent advances in these image-processing methods, with a particular focus on fluorescent images of the mouse brain, and outline a pathway toward an integrated image-processing pipeline tailored for connecto-informatics. Such an integrated workflow will help researchers map brain connectivity and better understand complex brain networks and their underlying functions. By highlighting the image-processing tools available for fluorescent imaging of the mouse brain, this review contributes to a deeper grasp of connecto-informatics, paving the way for a better comprehension of brain connectivity and its implications....
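As a rough illustration of the two processing stages the review surveys, the sketch below registers a brain image to an atlas template and then detects fluorescent signal with blob detection. SimpleITK and scikit-image are stand-in tools chosen for the example, not the specific software the review recommends, and the registration/detection parameters are placeholders.

    # Minimal sketch: atlas registration followed by signal (cell) detection
    import SimpleITK as sitk
    from skimage.feature import blob_log

    def register_to_atlas(moving_path, atlas_path):
        moving = sitk.ReadImage(moving_path, sitk.sitkFloat32)
        atlas = sitk.ReadImage(atlas_path, sitk.sitkFloat32)
        reg = sitk.ImageRegistrationMethod()
        reg.SetMetricAsMattesMutualInformation(50)
        reg.SetOptimizerAsRegularStepGradientDescent(1.0, 1e-4, 200)
        reg.SetInitialTransform(sitk.CenteredTransformInitializer(
            atlas, moving, sitk.AffineTransform(atlas.GetDimension())))
        reg.SetInterpolator(sitk.sitkLinear)
        transform = reg.Execute(atlas, moving)
        # Resample the acquired image into atlas space
        return sitk.Resample(moving, atlas, transform, sitk.sitkLinear, 0.0)

    def detect_cells(image_2d, threshold=0.05):
        # Laplacian-of-Gaussian blob detection on one fluorescent channel
        return blob_log(image_2d, min_sigma=2, max_sigma=6, threshold=threshold)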
As the next-generation video coding standard, Versatile Video Coding (VVC) significantly improves coding efficiency over the current High-Efficiency Video Coding (HEVC) standard. In practice, this improvement comes at the cost of increased encoding complexity, which makes VVC challenging to deploy because encoding is time-consuming. This work presents a technique to simplify VVC intra-prediction using Discrete Cosine Transform (DCT) feature analysis and a concatenate-designed CNN. The coefficients of the DCT-transformed Coding Units (CUs) reflect the complexity of the original texture, and the proposed CNN employs multiple classifiers to predict whether a CU should be split. This approach can determine whether to split CUs of different sizes according to the VVC standard, which simplifies the intra-prediction process. The experimental results indicate that our approach reduces the encoding time by 52.77% with a minimal Bjøntegaard Delta Bit Rate (BDBR) increase of 1.48% compared to the original algorithm, demonstrating a result competitive with other state-of-the-art methods in terms of coding efficiency and video quality....
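To make the DCT feature idea concrete, the sketch below computes the 2D DCT of a CU and uses the fraction of energy outside the DC coefficient as a texture-complexity cue for the split decision. The feature definition and threshold are assumptions for illustration; in the paper such features feed a concatenate-designed CNN with one classifier per CU size, not a fixed rule.

    # Illustrative sketch: DCT-based texture complexity as a CU split cue
    import numpy as np
    from scipy.fft import dctn

    def cu_texture_complexity(cu_block):
        # 2D DCT of the luma CU (e.g. a 32x32 block)
        coeffs = dctn(cu_block.astype(np.float64), norm="ortho")
        total_energy = np.sum(coeffs ** 2)
        dc_energy = coeffs[0, 0] ** 2
        # Fraction of energy in the AC coefficients: higher means more texture
        return (total_energy - dc_energy) / (total_energy + 1e-12)

    def should_split(cu_block, threshold=0.05):
        # Stand-in for the CNN classifiers: complex CUs are split candidates
        return cu_texture_complexity(cu_block) > threshold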
In recent years, deep learning has become a core technology in many fields such as computer vision. The parallel processing capability of GPUs greatly accelerates the training and inference of deep learning models, especially in the field of image processing and generation. This paper discusses the cooperation and differences between deep learning and traditional computer vision technology and focuses on the significant advantages of GPUs in medical image processing applications such as image reconstruction, filter enhancement, image registration, matching, and fusion. This convergence not only improves the efficiency and quality of image processing but also promotes the accuracy and speed of medical diagnosis. The paper also looks forward to the future development of deep learning and GPU optimization and their application in various industries....
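The snippet below is a minimal sketch of the GPU acceleration being discussed: the same PyTorch model runs on CPU or CUDA, and moving the model and data to the GPU is what yields the training and inference speed-up. The toy network and input sizes are placeholders, not a model from the paper.

    # Minimal sketch: running inference on GPU when available (PyTorch)
    import torch
    import torch.nn as nn

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    model = nn.Sequential(                      # toy image-enhancement network
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 3, padding=1),
    ).to(device)

    batch = torch.randn(8, 1, 256, 256, device=device)   # e.g. CT/MR slices
    with torch.no_grad():
        output = model(batch)                   # runs on the GPU if one is present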
The present study deals with image enhancement, which is a very common problem in image processing. This issue has been addressed in multiple works with different methods, most with the sole purpose of improving the perceived quality. Our goal is to propose an approach with a strong physical justification that can model the human visual system. This is why the Logarithmic Image Processing (LIP) framework was chosen. Within this model, initially dedicated to images acquired in transmission, it is possible to introduce the novel concept of negative grey levels, interpreted as light intensifiers. Such an approach permits the extension of the dynamic range of a low-light image to the full grey scale in “real-time”, which means at camera speed. In addition, this method is easily generalizable to colour images and is reversible, i.e., bijective in the mathematical sense, and can be applied to images acquired in reflection thanks to the consistency of the LIP framework with human vision. Various application examples are presented, as well as prospects for extending this work....
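For readers unfamiliar with the LIP framework, the sketch below implements the standard Jourlin-Pinoli operations (assumed here with M = 256 and grey tones read as absorption, 0 = fully transparent). Subtracting a positive constant c, i.e. adding the negative grey level -c, acts as a light intensifier and brightens a low-light image. The constant and the final clipping are simplifications for illustration; the paper's negative grey levels extend the scale so the mapping stays reversible.

    # Hedged sketch of LIP addition/subtraction and constant-based brightening
    import numpy as np

    M = 256.0

    def lip_add(f, g):
        # LIP addition: f (+) g = f + g - f*g/M
        return f + g - f * g / M

    def lip_sub(f, g):
        # LIP subtraction: f (-) g = M*(f - g)/(M - g), defined for g < M
        return M * (f - g) / (M - g)

    def brighten(image_u8, c=100.0):
        # Convert to LIP tones (absorption), intensify, convert back to classical grey
        f = M - 1.0 - image_u8.astype(np.float64)     # dark pixels -> high absorption
        enhanced = np.clip(lip_sub(f, c), 0.0, M - 1.0)
        return (M - 1.0 - enhanced).astype(np.uint8)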